Maximum entropy probability distribution

In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class (usually defined in terms of specified properties or measures), then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time.
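For example, among all probability distributions supported on a finite set of ''n'' outcomes, the uniform distribution (each outcome having probability 1/''n'') has the largest entropy, log ''n''; it is therefore the maximum entropy distribution when nothing beyond the support is specified.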
== Definition of entropy and differential entropy ==

If ''X'' is a discrete random variable with distribution given by
:\operatorname{Pr}(X=x_k) = p_k \quad \mbox{for } k=1,2,\ldots
then the entropy of ''X'' is defined as
:H(X) = - \sum_k p_k \log p_k \;.
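A minimal numerical sketch of this definition (the helper name discrete_entropy is illustrative, and NumPy is assumed to be available):

 import numpy as np
 
 def discrete_entropy(p):
     # H(X) = -sum_k p_k log p_k in nats; terms with p_k = 0 contribute 0
     p = np.asarray(p, dtype=float)
     nz = p[p > 0]
     return -np.sum(nz * np.log(nz))
 
 uniform = np.full(6, 1 / 6)
 biased = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])
 print(discrete_entropy(uniform))  # ln 6 ≈ 1.79 nats, the maximum for 6 outcomes
 print(discrete_entropy(biased))   # ≈ 1.43 nats, strictly smaller

Consistent with the introduction, the uniform distribution attains the larger entropy.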
If ''X'' is a continuous random variable with probability density ''p''(''x''), then the differential entropy of ''X'' is defined as〔Williams, D. (2001) ''Weighing the Odds''. Cambridge University Press. ISBN 0-521-00618-X (pages 197–199)〕〔Bernardo, J.M., Smith, A.F.M. (2000) ''Bayesian Theory''. Wiley. ISBN 0-471-49464-X (pages 209, 366)〕〔O'Hagan, A. (1994) ''Kendall's Advanced Theory of Statistics, Vol 2B, Bayesian Inference''. Edward Arnold. ISBN 0-340-52922-9 (Section 5.40)〕
:H(X) = - \int_{-\infty}^\infty p(x) \log p(x) \, dx \;.
''p''(''x'') log ''p''(''x'') is understood to be zero whenever ''p''(''x'') = 0.
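As a quick check of this integral (a sketch only, using a plain Riemann sum and assuming NumPy), the standard normal density has differential entropy ½ ln(2πe) ≈ 1.4189 nats:

 import numpy as np
 
 # Riemann-sum approximation of -∫ p(x) log p(x) dx for the standard normal density
 x = np.linspace(-10.0, 10.0, 200001)
 dx = x[1] - x[0]
 p = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
 print(-np.sum(p * np.log(p)) * dx)     # ≈ 1.4189
 print(0.5 * np.log(2 * np.pi * np.e))  # exact value ½ ln(2πe) ≈ 1.4189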
This is a special case of the more general forms described in the articles Entropy (information theory), Principle of maximum entropy, and Differential entropy. In connection with maximum entropy distributions, this simple form is the only one needed, because maximizing ''H''(''X'') also maximizes the more general forms.
The base of the logarithm is not important as long as the same one is used consistently: change of base merely results in a rescaling of the entropy. Information theorists may prefer to use base 2 in order to express the entropy in bits; mathematicians and physicists will often prefer the natural logarithm, resulting in a unit of nats for the entropy.
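For example, a fair coin toss has entropy 1 bit when base 2 is used, and ln 2 ≈ 0.693 nats with the natural logarithm; the two values differ only by the constant factor ln 2.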
